Integrating posterior probability calibration training into text classification algorithm
Jing JIANG, Yu CHEN, Jieping SUN, Shenggen JU
Journal of Computer Applications    2022, 42 (6): 1789-1795.   DOI: 10.11772/j.issn.1001-9081.2021091638

Pre-trained language models used for text representation achieve high accuracy on various text classification tasks, but two problems remain. First, the model computes posterior probabilities over all categories and selects the category with the largest posterior probability as the final classification result; in many scenarios, however, the quality of the posterior probability itself provides more reliable information than the final classification result alone. Second, the classifier of the pre-trained language model suffers performance degradation when assigning different labels to texts with similar semantics. To address these two problems, a model combining posterior probability calibration and negative example supervision, named PosCal-negative, was proposed. In the PosCal-negative model, the difference between the predicted probability and the empirical posterior probability was dynamically penalized in an end-to-end way during training, and texts with different labels were used to apply negative supervision to the encoder, so that different feature vector representations were generated for different categories. Experimental results show that the classification accuracies of the proposed model on two Chinese maternal and child care text classification datasets, MATINF-C-AGE and MATINF-C-TOPIC, reach 91.55% and 69.19% respectively, which are 1.13 and 2.53 percentage points higher than those of the Enhanced Representation through kNowledge IntEgration (ERNIE) model.
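The combined objective described above (cross-entropy plus a calibration penalty plus negative supervision) can be sketched as a single training loss. The function below is a minimal illustrative sketch, not the paper's implementation: the weighting coefficients, the squared-gap calibration term, and the cosine-similarity negative term are all assumptions.

```python
import numpy as np

def poscal_negative_loss(probs, labels, emp_posterior, neg_pairs, embeds,
                         lam_cal=0.1, lam_neg=0.1):
    """Toy PosCal-negative-style loss (illustrative only).

    probs         : (n, k) predicted class probabilities
    labels        : (n,)   gold label indices
    emp_posterior : (n, k) empirical posterior probabilities
    neg_pairs     : pairs (i, j) of texts carrying different labels
    embeds        : (n, d) encoder outputs for the n texts
    """
    n = len(labels)
    # standard cross-entropy on the gold-class probabilities
    ce = -np.mean(np.log(probs[np.arange(n), labels] + 1e-12))
    # calibration penalty: squared gap between predicted and
    # empirical posterior probabilities, averaged over classes
    cal = np.mean((probs - emp_posterior) ** 2)
    # negative supervision: penalize high cosine similarity between
    # encodings of texts that carry different labels
    neg = 0.0
    for i, j in neg_pairs:
        cos = embeds[i] @ embeds[j] / (
            np.linalg.norm(embeds[i]) * np.linalg.norm(embeds[j]) + 1e-12)
        neg += max(0.0, cos)
    neg /= max(len(neg_pairs), 1)
    return ce + lam_cal * cal + lam_neg * neg
```

When the predicted probabilities match the empirical posterior exactly and differently-labeled texts are already encoded orthogonally, the loss reduces to plain cross-entropy; a calibration gap or similar negative-pair encodings strictly increases it.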

Recommendation system based on non-sampling collaborative knowledge graph network
Wenjing JIANG, Xi XIONG, Zhongzhi LI, Binyong LI
Journal of Computer Applications    2022, 42 (4): 1057-1064.   DOI: 10.11772/j.issn.1001-9081.2021071255

A Knowledge Graph (KG) can extract information effectively by organizing massive data efficiently, so recommendation methods based on knowledge graphs have been widely studied and applied. Aiming at the sampling error problem of graph neural networks in knowledge graph modeling, a Non-sampling Collaborative Knowledge graph Network (NCKN) was proposed. Firstly, a non-sampling knowledge dissemination module was designed, in which linear aggregators of different sizes were used within a single convolutional layer to capture deep-level information and achieve efficient non-sampling pre-computation. Then, to distinguish the contribution degrees of neighbor nodes, an attention mechanism was introduced into the dissemination process. Finally, collaborative signals from user interactions and knowledge embeddings were combined in the collaborative dissemination module to better describe user preferences. The performance of NCKN in CTR (Click-Through Rate) prediction and Top-k recommendation was evaluated on three real datasets. Experimental results show that, compared with the mainstream algorithms RippleNet (Ripple Network) and KGCN (Knowledge Graph Convolutional Network), the accuracy of NCKN in CTR prediction increases by 2.71% and 4.60% respectively, and in Top-k recommendation its accuracy increases by 5.26% and 3.91% on average respectively. The proposed method not only solves the sampling error problem of graph neural networks in knowledge graph modeling, but also improves the accuracy of the recommendation model.
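The core contrast with sampled graph convolution is that every neighbor contributes to the aggregate, with attention deciding how much. The layer below is a minimal sketch of such a non-sampling attentive propagation step; the dot-product attention scores, the tanh nonlinearity, and all shapes are assumptions for illustration, not NCKN's actual architecture.

```python
import numpy as np

def attentive_full_aggregate(node_emb, adjacency, W):
    """One non-sampling propagation layer over a small graph.

    node_emb  : (n, d) entity embeddings
    adjacency : list where adjacency[v] is the FULL neighbor list of v
                (no neighbor sampling, hence no sampling error)
    W         : (d, d) layer weight matrix
    """
    out = np.zeros_like(node_emb)
    for v in range(len(node_emb)):
        nbrs = adjacency[v]
        if not nbrs:
            out[v] = np.tanh(W @ node_emb[v])
            continue
        # attention scores: dot product of center node with each neighbor,
        # normalized with a softmax to weight contribution degrees
        scores = np.array([node_emb[v] @ node_emb[u] for u in nbrs])
        alpha = np.exp(scores - scores.max())
        alpha /= alpha.sum()
        agg = sum(a * node_emb[u] for a, u in zip(alpha, nbrs))
        # transform self representation plus attentive neighbor aggregate
        out[v] = np.tanh(W @ (node_emb[v] + agg))
    return out
```

Because the neighbor loop runs over the entire adjacency list rather than a sampled subset, the result is deterministic across runs, which is what makes the pre-computation of such layers feasible and sampling-error-free.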
